52 research outputs found

    A modified naturalness principle and its experimental tests

    Motivated by LHC results, we modify the usual criterion for naturalness by ignoring the uncomputable power divergences. The Standard Model satisfies the modified criterion ('finite naturalness') for the measured values of its parameters. Extensions of the SM motivated by observations (Dark Matter, neutrino masses, the strong CP problem, vacuum instability, inflation) satisfy finite naturalness in special ranges of their parameter spaces, which often imply new particles below a few TeV. Finite naturalness bounds are weaker than usual naturalness bounds because any new particle with SM gauge interactions gives a finite contribution to the Higgs mass at two-loop order.
    Comment: 17 pages, 3 figures. v3: final version uploaded, references added, numerical error in the last column of table 1 fixed.
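
    For orientation, the size of such a finite two-loop contribution can be estimated parametrically (a schematic sketch in the spirit of the abstract, not the paper's exact expression): a new particle of mass M coupled through a SM gauge coupling g shifts the Higgs mass by roughly

        \delta m_h^2 \sim \frac{g^4}{(4\pi)^4}\, M^2 ,

    so demanding \delta m_h^2 \lesssim m_h^2 bounds M at the few-to-tens of TeV scale, which is why these constraints are weaker than one-loop naturalness bounds.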

    The Next to Minimal Supersymmetric Standard Model with a Moderate Stop Mass

    In this thesis we analyze the Next to Minimal Supersymmetric Standard Model (NMSSM), looking for a natural electroweak symmetry breaking. Focusing on a moderate stop mass and requiring perturbative unification of the gauge couplings (which, after all, is a very good prediction of the MSSM), we put quite severe restrictions on the parameter space. In particular, if we do not modify the theory any further, we find that we must live with a small κ ≲ 0.2 and λ ≲ 0.7. Even in this limit, a SM-like Higgs boson, which is present in the spectrum, barely touches the LEP2 mass bound. We show that we can improve on this situation if we allow vector-like multiplets of extra SU(5)-symmetric matter to be present at intermediate energies. This matter, while not disturbing unification, allows for a higher value of λ at the Fermi scale, such that we are able to obtain a Higgs boson mass as large as 125 GeV. At this point we analyze a particular realization of this picture in the corner of the NMSSM parameter space where κ = 0, thus saturating the upper limit on λ at the weak scale. Setting to zero the trilinear self-interaction of the singlet superfield in the superpotential restores a Peccei-Quinn symmetry in the action, which is welcome to solve the μ problem. A small explicit breaking of this symmetry is required to give mass to a (pseudo-)Goldstone boson G, which would otherwise be experimentally excluded.
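
    The role of λ can be seen from the standard tree-level bound on the SM-like Higgs mass in the NMSSM (a textbook formula, with Δ_t denoting the radiative stop correction):

        m_h^2 \lesssim m_Z^2 \cos^2 2\beta + \lambda^2 v^2 \sin^2 2\beta + \Delta_t^2 ,

    so raising λ at the Fermi scale lifts m_h toward 125 GeV without requiring a heavy stop, which is exactly the handle exploited above.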

    Radiative PQ Breaking and the Higgs Boson Mass

    The small and negative value of the Standard Model Higgs quartic coupling at high scales can be understood in terms of anthropic selection on a landscape where large and negative values are favored: most universes have a very short-lived electroweak vacuum, and typical observers are in universes close to the corresponding metastability boundary. We provide a simple example of such a landscape with a Peccei-Quinn symmetry breaking scale generated through dimensional transmutation and supersymmetry softly broken at an intermediate scale. Large and negative contributions to the Higgs quartic are typically generated on integrating out the saxion field. Cancellations among these contributions are forced by the anthropic requirement of a sufficiently long-lived electroweak vacuum, determining the multiverse distribution for the Higgs quartic in a similar way to that of the cosmological constant. This leads to a statistical prediction of the Higgs boson mass that, for a wide range of parameters, yields the observed value within the 1σ statistical uncertainty of ∼5 GeV originating from the multiverse distribution. The strong CP problem is solved, and single-component axion dark matter is predicted, with an abundance that can be understood from environmental selection. A more general setting for the Higgs mass prediction is discussed.
    Comment: 30 pages, 10 figures; v2, JHEP version.
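
    The metastability boundary can be made quantitative with the standard vacuum-decay estimate (a general formula from the stability literature, not one specific to this paper): for a Higgs quartic running negative, λ(Λ) < 0 at a scale Λ, the bounce action and decay rate per unit volume are approximately

        S \simeq \frac{8\pi^2}{3\,|\lambda(\Lambda)|} , \qquad \frac{\Gamma}{V} \sim \Lambda^4\, e^{-S} ,

    so a sufficiently long-lived vacuum requires |λ| to remain small at high scales, which is the cancellation the anthropic selection enforces.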

    Multiverse Dark Matter: SUSY or Axions

    The observed values of the cosmological constant and the abundance of Dark Matter (DM) can be successfully understood, using certain measures, by imposing the anthropic requirement that density perturbations go non-linear and virialize to form halos. This requires a probability distribution favoring low amounts of DM, i.e. low values of the PQ scale f for the QCD axion and low values of the superpartner mass scale m̃ for LSP thermal relics. In theories with independent scanning of multiple DM components, there is a high probability for DM to be dominated by a single component. For example, with independent scanning of f and m̃, TeV-scale LSP DM and an axion solution to the strong CP problem are unlikely to coexist. With thermal LSP DM, the scheme allows an understanding of a Little SUSY Hierarchy with multi-TeV superpartners. Alternatively, with axion DM, PQ breaking before (after) inflation leads to f typically below (below) the projected range of the current ADMX experiment of f = (3-30) × 10^11 GeV, providing strong motivation to develop experimental techniques for probing lower f.
    Comment: 32 pages, 14 figures, version published in JHEP.

    A Fourth Exception in the Calculation of Relic Abundances

    We propose that the dark matter abundance is set by the decoupling of inelastic scattering instead of annihilations. This coscattering mechanism is generically realized if dark matter scatters against states of comparable mass from the thermal bath. Coscattering points to dark matter that is exponentially lighter than the weak scale and has a suppressed annihilation rate, avoiding stringent constraints from indirect detection. Dark matter upscatters into states whose late decays can lead to observable distortions of the blackbody spectrum of the cosmic microwave background.
    Comment: 8 pages, 6 figures. V3: figure added.
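
    Schematically (our paraphrase of the mechanism, not a formula quoted from the abstract), the dark matter abundance freezes out when the inelastic scattering rate of χ against the bath drops below the Hubble rate,

        n_{\rm bath}\, \langle\sigma v\rangle_{\chi\,{\rm bath} \to \psi\,{\rm bath}} \simeq H(T_f) ,

    rather than the usual annihilation condition n_\chi \langle\sigma v\rangle_{\rm ann} \simeq H(T_f); since the bath density far exceeds n_\chi, much smaller couplings suffice, consistent with light dark matter and a suppressed annihilation signal.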

    Non-contrastive sentence representations via self-supervision

    Sample-contrastive methods, typically referred to simply as contrastive, are the foundation of most unsupervised methods for learning text and sentence embeddings. On the other hand, a different class of self-supervised loss functions and methods has been considered in the computer vision community, referred to as dimension-contrastive. In this paper, we thoroughly compare this class of methods with the standard baseline for contrastive sentence embeddings, SimCSE. We find that self-supervised embeddings trained using dimension-contrastive objectives can outperform SimCSE on downstream tasks without needing auxiliary loss functions.
    Comment: Submitted and rejected by EMNLP 2023. Contact the authors for a copy of the "reviews".
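
    As a concrete illustration of the dimension-contrastive family, a minimal sketch of a Barlow Twins-style objective is shown below (the paper's actual losses and hyperparameters may differ; the names barlow_twins_loss and lambd are illustrative):

        import torch

        def barlow_twins_loss(z1, z2, lambd=5e-3):
            """Dimension-contrastive loss: decorrelate embedding dimensions
            across two views instead of contrasting samples against each other."""
            n, d = z1.shape
            z1 = (z1 - z1.mean(0)) / z1.std(0)  # standardize each dimension
            z2 = (z2 - z2.mean(0)) / z2.std(0)
            c = (z1.T @ z2) / n                 # (d, d) cross-correlation matrix
            on_diag = (torch.diagonal(c) - 1).pow(2).sum()   # matched dims -> correlation 1
            off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # all others -> 0
            return on_diag + lambd * off_diag

    In an unsupervised sentence-embedding setup in the style of SimCSE, z1 and z2 could be two dropout-perturbed encoder passes over the same batch of sentences.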

    Heavy Vector Triplets: Bridging Theory and Data

    We introduce a model-independent strategy to study narrow resonances, which we apply to a heavy vector triplet of the Standard Model (SM) group for illustration. The method is based on a simplified phenomenological Lagrangian which reproduces a large class of explicit models. Firstly, this allows us to derive robust model-independent phenomenological features and, conversely, to identify the peculiarities of different explicit realizations. Secondly, limits on cross-section times branching ratio (BR) can be converted into bounds on a few relevant parameters in a fully analytic way, allowing for an interpretation in any given explicit model. Based on the available 8 TeV LHC analyses, we derive current limits and interpret them for vector triplets arising in weakly coupled (gauge) and strongly coupled (composite) extensions of the SM. We point out that a model-independent limit-setting procedure must be based on purely on-shell quantities, like cross-section times BR. Finite-width effects altering the limits can be considerably reduced by focusing on the on-shell signal region. We illustrate this aspect with a study of the invariant mass distribution in di-lepton searches and the transverse mass distribution in lepton-neutrino final states. In addition to this paper, we provide a set of online tools available at a dedicated webpage.
    Comment: 53 pages, 10 figures; references added, typos corrected; published version.
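
    The on-shell logic rests on the narrow-width approximation (a standard result, stated schematically):

        \sigma(pp \to V \to f_1 f_2) \simeq \sigma(pp \to V) \times \mathrm{BR}(V \to f_1 f_2) , \qquad \Gamma_V \ll M_V ,

    so an experimental limit on the left-hand side translates analytically into a bound on the couplings controlling production and decay, independently of off-shell details.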

    New Physics from High Energy Tops

    Precision measurements of high-energy top quarks at the LHC constitute a powerful probe of new physics. We study the effect of four-fermion operators involving two tops and two light quarks on the high-energy tail of the tt̄ invariant mass distribution. We use existing measurements at a center-of-mass energy of 13 TeV, and state-of-the-art calculations of the Standard Model contribution, to derive bounds on the coefficients of these operators. We estimate the projected reach of the LHC at higher luminosities and discuss the validity of these limits within the Effective Field Theory description. We find that current measurements constrain the mass scale of these operators to be larger than about 1-2 TeV, while we project that future LHC data will be sensitive to mass scales of about 3-4 TeV. We apply our bounds to constrain composite Higgs models with partial compositeness and models with approximate flavor symmetries. We find our limits to be most relevant for flavor non-universal models with a moderately large coupling of the heavy new physics states to third-generation quarks.
    Comment: 13 pages, 2 appendices, 5 figures, references added.
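
    As an illustration (one representative operator, not the paper's full basis), a four-fermion term of the form

        \mathcal{L} \supset \frac{c}{\Lambda^2}\, (\bar t \gamma^\mu t)(\bar q \gamma_\mu q)

    corrects the tt̄ spectrum by a relative amount that grows with the partonic energy, parametrically as c·ŝ/Λ^2, which is why the high-energy tail of the invariant mass distribution drives the sensitivity.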